
[WIP] Refactor model.from_dict to use kwargs only for override (Sourcery refactored) #813

Closed
sourcery-ai[bot] wants to merge 4 commits

Conversation


@sourcery-ai sourcery-ai bot commented Sep 12, 2021

Pull Request #807 refactored by Sourcery.

Since the original Pull Request was opened from a fork in a contributor's
repository, we are unable to create a Pull Request branching from it.

To incorporate these changes, you can either:

  1. Merge this Pull Request instead of the original, or

  2. Ask your contributor to locally incorporate these commits and push them to
     the original Pull Request.

To incorporate the changes via the command line:

```bash
git fetch https://github.com/glotaran/pyglotaran pull/807/head
git merge --ff-only FETCH_HEAD
git push
```

NOTE: As code is pushed to the original Pull Request, Sourcery will
re-run and update (force-push) this Pull Request with new refactorings as
necessary. If Sourcery finds no refactorings at any point, this Pull Request
will be closed automatically.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Help us improve this pull request!

jsnel and others added 3 commits September 11, 2021 23:47
The keyword arguments megacomplex_types and default_megacomplex_type are only used as overrides in testing (see the first sketch after this commit list)

Co-authored-by: Sebastian Weigand <[email protected]>
Those context managers allow easy updating or recreating of the plugin registry for tests where you don't want to add plugins to the global registry (see the second sketch below).
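The first commit above describes the refactor's intent: from_dict should read everything from the model dictionary, with megacomplex_types and default_megacomplex_type available only as keyword-only overrides for tests. Here is a minimal sketch of that pattern, assuming a placeholder dict-based registry and a mock class; this is not the actual pyglotaran implementation, and all names below are illustrative:

```python
from __future__ import annotations

_REGISTRY: dict[str, type] = {}  # stand-in for the global plugin registry (hypothetical)


def from_dict(
    model_dict: dict,
    *,  # keyword-only: test-time overrides, never passed positionally
    megacomplex_types: dict[str, type] | None = None,
    default_megacomplex_type: str | None = None,
) -> dict:
    """Build a model description from ``model_dict``.

    Both keyword arguments default to what the plugin registry provides;
    tests pass them explicitly to inject mocks without touching the
    global registry.
    """
    if megacomplex_types is None:
        megacomplex_types = dict(_REGISTRY)  # normal path: resolve from the registry
    if default_megacomplex_type is None:
        default_megacomplex_type = model_dict.get("default-megacomplex")
    return {
        "megacomplex_types": megacomplex_types,
        "default_megacomplex_type": default_megacomplex_type,
        "spec": model_dict,
    }


class MockMegacomplex:  # placeholder test double
    pass


# Test-only call site: the mock type is injected via the keyword argument,
# so nothing is registered globally.
model = from_dict(
    {"default-megacomplex": "mock"},
    megacomplex_types={"mock": MockMegacomplex},
)
assert model["megacomplex_types"] == {"mock": MockMegacomplex}
```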
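The second commit refers to test-scoped context managers for the plugin registry. A sketch of what such a context manager can look like, again assuming a simple dict-based registry (names and registry shape are illustrative, not pyglotaran's actual API):

```python
from contextlib import contextmanager

_PLUGIN_REGISTRY: dict = {}  # stand-in for the global plugin registry (hypothetical)


@contextmanager
def temporary_plugin_registry(overrides: dict, *, recreate: bool = False):
    """Swap ``overrides`` into the registry for the duration of a test.

    With ``recreate=True`` the registry is replaced entirely; otherwise the
    overrides are merged on top of the existing entries. The original
    registry is restored on exit, so nothing leaks into global state.
    """
    global _PLUGIN_REGISTRY
    saved = _PLUGIN_REGISTRY
    _PLUGIN_REGISTRY = dict(overrides) if recreate else {**saved, **overrides}
    try:
        yield _PLUGIN_REGISTRY
    finally:
        _PLUGIN_REGISTRY = saved


class MockPlugin:  # placeholder test double
    pass


# Inside the block the mock plugin is visible; on exit the registry is
# restored, so parallel tests and the global registry stay untouched.
with temporary_plugin_registry({"mock": MockPlugin}) as registry:
    assert "mock" in registry
assert "mock" not in _PLUGIN_REGISTRY
```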
@sourcery-ai sourcery-ai bot requested review from joernweissenborn, jsnel and a team as code owners September 12, 2021 01:53

sourcery-ai bot commented Sep 12, 2021

Sourcery Code Quality Report

✅  Merging this PR will increase code quality in the affected files by 0.28%.

| Quality metrics | Before | After | Change |
| --------------- | ------ | ----- | ------ |
| Complexity | 7.24 ⭐ | 7.15 ⭐ | -0.09 👍 |
| Method Length | 101.71 🙂 | 96.29 🙂 | -5.42 👍 |
| Working memory | 11.28 😞 | 11.42 😞 | 0.14 👎 |
| Quality | 52.46% 🙂 | 52.74% 🙂 | 0.28% 👍 |

| Other metrics | Before | After | Change |
| ------------- | ------ | ----- | ------ |
| Lines | 206 | 194 | -12 |

| Changed files | Quality Before | Quality After | Quality Change |
| ------------- | -------------- | ------------- | -------------- |
| glotaran/builtin/io/yml/yml.py | 52.46% 🙂 | 52.74% 🙂 | 0.28% 👍 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
| ---- | -------- | ---------- | ------ | -------------- | ------- | -------------- |
| glotaran/builtin/io/yml/yml.py | YmlProjectIo.load_scheme | 13 🙂 | 257 ⛔ | 15 😞 | 34.68% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| glotaran/builtin/io/yml/yml.py | YmlProjectIo.save_result | 3 ⭐ | 214 ⛔ | 11 😞 | 52.52% 🙂 | Try splitting into smaller methods. Extract out complex expressions |

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!

@sonarqubecloud

Kudos, SonarCloud Quality Gate passed!

- Bugs: 0 (rating A)
- Vulnerabilities: 0 (rating A)
- Security Hotspots: 0 (rating A)
- Code Smells: 0 (rating A)
- Coverage: no coverage information
- Duplication: 0.0%

@sourcery-ai sourcery-ai bot closed this Sep 12, 2021
@sourcery-ai sourcery-ai bot deleted the sourcery/pull-807 branch September 12, 2021 01:55
@github-actions

Binder 👈 Launch a binder notebook on branch glotaran/pyglotaran/sourcery/pull-807

@github-actions

Benchmark is done. Check out the benchmark result page.
Benchmark differences below 5% might be due to CI noise.

Benchmark diff

Parametrized benchmark signatures:

`BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)`

All benchmarks:

```
       before           after         ratio
     [dc00e6da]       [c888f79a]
     <v0.4.0>
-        48.4±1ms       35.7±0.3ms     0.74  BenchmarkOptimize.time_optimize(False, False, False)
-         267±1ms         41.6±2ms     0.16  BenchmarkOptimize.time_optimize(False, False, True)
-        70.7±1ms       59.1±0.4ms     0.84  BenchmarkOptimize.time_optimize(False, True, False)
       72.1±0.8ms         64.0±1ms    ~0.89  BenchmarkOptimize.time_optimize(False, True, True)
       48.3±0.9ms       45.9±0.4ms     0.95  BenchmarkOptimize.time_optimize(True, False, False)
-         269±2ms        88.0±40ms     0.33  BenchmarkOptimize.time_optimize(True, False, True)
         69.1±1ms       72.9±0.9ms     1.05  BenchmarkOptimize.time_optimize(True, True, False)
+      71.6±0.3ms         120±50ms     1.68  BenchmarkOptimize.time_optimize(True, True, True)
             181M             180M     0.99  IntegrationTwoDatasets.peakmem_create_result
             199M             197M     0.99  IntegrationTwoDatasets.peakmem_optimize
-         223±3ms          186±4ms     0.84  IntegrationTwoDatasets.time_create_result
-      4.77±0.03s       1.87±0.06s     0.39  IntegrationTwoDatasets.time_optimize
```
